Robust signal decompositions on the circle

Kose, Aral, Liberzon, Daniel

arXiv.org Artificial Intelligence

Imagine an agent moving along a circular path in the plane with some stationary landmarks, whose number and exact locations are unknown to the agent. Suppose that each landmark transmits an omnidirectional signal with a finite range, which we can model as a function that equals 1 inside a circular disk centered at the landmark and 0 outside. The boundaries of these disks, whose radii are in general different, may intersect the agent's path at one or two points or not at all. As the agent moves along its path, it can perceive these signals and so it knows, at each point, the number of landmarks that are within range. It cannot, however, identify different landmarks by their signals, and neither can it discern anything about each signal's strength other than its presence or absence. The agent's knowledge of its position on the circle may also not be precise, and the signal transmissions or measurements may occur with some sampling frequency rather than continuously in time. For these reasons, all that the agent can reliably reconstruct is a sequence of nonnegative integers corresponding to local landmark counts around the circle, and it may not be sure of the precise count at the exact points where this count changes. In this scenario, we want to pose the following questions: Can the agent figure out the total number of landmarks (excluding, of course, those whose signals do not reach any points on the circle)?
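The counting model described above is straightforward to simulate. The sketch below (an illustration only, not the authors' code; the function name and parameters are made up) samples the landmark-count signal at discrete points on a circular path, modeling each landmark's omnidirectional signal as the indicator function of its disk:

```python
import math

def landmark_counts(landmarks, path_radius=1.0, n_samples=360):
    """Sample the landmark-count signal at n_samples points on a circular
    path of radius path_radius centered at the origin.

    landmarks: list of (x, y, r) disks; a landmark is within range of a
    path point exactly when the point lies inside its disk.
    """
    counts = []
    for k in range(n_samples):
        theta = 2 * math.pi * k / n_samples
        px = path_radius * math.cos(theta)
        py = path_radius * math.sin(theta)
        # Count landmarks whose signal (1 inside the disk, 0 outside)
        # reaches this point of the path.
        c = sum(1 for (x, y, r) in landmarks
                if math.hypot(px - x, py - y) <= r)
        counts.append(c)
    return counts
```

The resulting integer sequence is exactly the data the agent can reliably reconstruct; a landmark whose disk never meets the path contributes zero everywhere and is invisible to the agent, as the abstract notes.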


How to Detect Unauthorized Data Usages in Text-to-image Diffusion Models

Wang, Zhenting, Chen, Chen, Liu, Yuchen, Lyu, Lingjuan, Metaxas, Dimitris, Ma, Shiqing

arXiv.org Artificial Intelligence

Recent text-to-image diffusion models have shown surprising performance in generating high-quality images. However, concerns have arisen regarding the unauthorized usage of data during the training process. One example is when a model trainer collects a set of images created by a particular artist and attempts to train a model capable of generating similar images without obtaining permission from the artist. To address this issue, it becomes crucial to detect unauthorized data usage. In this paper, we propose a method for detecting such unauthorized data usage by planting injected memorization into the text-to-image diffusion models trained on the protected dataset. Specifically, we modify the protected image dataset by adding unique content to the images, such as stealthy image warping functions that are imperceptible to human vision but can be captured and memorized by diffusion models. By analyzing whether the model has memorized the injected content (i.e., whether the generated images are processed by the chosen post-processing function), we can detect models that illegally utilized the unauthorized data. Our experiments conducted on Stable Diffusion and LoRA models demonstrate the effectiveness of the proposed method in detecting unauthorized data usages.
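As a rough illustration of the injection idea (this is not the paper's actual warping function; `inject_warp`, `amplitude`, and `freq` are hypothetical placeholders), one could apply a small, structured coordinate distortion to every protected image before release. A model trained on the warped set may memorize the distortion, which a detector can then probe for in generated samples:

```python
import numpy as np

def inject_warp(img, amplitude=1.0, freq=1):
    """Hypothetical stand-in for a stealthy warping function: shift each
    row horizontally by a pixel-scale sinusoid.  img is a 2-D array
    (grayscale); the output has the same shape and dtype.
    """
    h, w = img.shape[:2]
    out = np.empty_like(img)
    for y in range(h):
        # Row-dependent horizontal shift; small enough to be subtle,
        # structured enough to be learnable by a diffusion model.
        shift = amplitude * np.sin(2 * np.pi * freq * y / h)
        xs = np.clip(np.round(np.arange(w) + shift).astype(int), 0, w - 1)
        out[y] = img[y, xs]
    return out
```

Detection would then amount to testing whether a suspect model's outputs exhibit the chosen distortion significantly more often than a clean model's outputs do.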


Temporal Reasoning About Uncertain Worlds

Hanks, Steve

arXiv.org Artificial Intelligence

We present a program that manages a database of temporally scoped beliefs. The basic functionality of the system includes maintaining a network of constraints among time points, supporting a variety of fetches, mediating the application of causal rules, monitoring intervals of time for the addition of new facts, and managing data dependencies that keep the database consistent. At this level the system operates independently of any measure of belief or belief calculus. We provide an example of how an application program might use this functionality to implement a belief calculus.
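One piece of the functionality described above, a network of ordering constraints among time points, can be sketched in a few lines. This is a toy illustration under invented names (`TimeNet` is not the paper's system): it supports precedence queries over asserted constraints and rejects assertions that would make the network inconsistent:

```python
from collections import defaultdict

class TimeNet:
    """Toy network of ordering constraints among time points."""

    def __init__(self):
        # after[a] = set of time points asserted to come after a
        self.after = defaultdict(set)

    def precedes(self, a, b):
        """Is there a chain of asserted constraints a < ... < b?"""
        frontier, seen = list(self.after[a]), set()
        while frontier:
            x = frontier.pop()
            if x == b:
                return True
            if x in seen:
                continue
            seen.add(x)
            frontier.extend(self.after[x])
        return False

    def assert_before(self, a, b):
        """Add the constraint a < b, refusing to create a cycle."""
        if a == b or self.precedes(b, a):
            raise ValueError("inconsistent ordering constraint")
        self.after[a].add(b)
```

The real system additionally mediates causal rules and data dependencies on top of such a constraint network; this sketch shows only the consistency-preserving core.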


Optimal Signalling in Attractor Neural Networks

Meilijson, Isaac, Ruppin, Eytan

Neural Information Processing Systems

It is well known that a given cortical neuron can respond with a different firing pattern for the same synaptic input, depending on its firing history and on the effects of modulator transmitters (see [Connors and Gutnick, 1990] for a review). The time span of different channel conductances is very broad, and the influence of some ionic currents varies with the history of the membrane potential [Lytton, 1991]. Motivated by the history-dependent nature of neuronal firing, we continue our …


History-Dependent Attractor Neural Networks

Meilijson, Isaac, Ruppin, Eytan

Neural Information Processing Systems

We present a methodological framework enabling a detailed description of the performance of Hopfield-like attractor neural networks (ANN) in the first two iterations. Using the Bayesian approach, we find that performance is improved when a history-based term is included in the neuron's dynamics. A further enhancement of the network's performance is achieved by judiciously choosing the censored neurons (those which become active in a given iteration) on the basis of the magnitude of their post-synaptic potentials. The contribution of biologically plausible, censored, history-dependent dynamics is especially marked in conditions of low firing activity and sparse connectivity, two important characteristics of the mammalian cortex. In such networks, the performance attained is higher than the performance of two 'independent' iterations, which represents an upper bound on the performance of history-independent networks.
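The core idea of adding a history-based term to the neuron's dynamics can be sketched minimally (the constant weighting `lam` here is a placeholder, not the Bayesian-derived term of the paper): the local field driving each unit's sign update is augmented by a contribution from the unit's previous state.

```python
import numpy as np

def hebbian_weights(patterns):
    """Standard Hebbian weight matrix for +/-1 patterns (zero diagonal)."""
    P = np.asarray(patterns, dtype=float)   # shape (num_patterns, N)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_history_step(W, s, s_prev, lam=0.5):
    """One synchronous update of a Hopfield-like network in which the
    post-synaptic field is augmented by a history term lam * s_prev."""
    field = W @ s + lam * s_prev
    return np.where(field >= 0, 1, -1)
```

In the history-independent case (`lam = 0`) this reduces to the standard Hopfield update; the paper's contribution is to derive, via the Bayesian approach, how the history term and the censoring of neurons should be chosen to improve retrieval in the first two iterations.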

